Fix vector search error and integrate llama-index for RAG operations #17
Merged
Copilot changed the title from "[WIP] Fix vector search failure in web UI" to "Fix vector search error and integrate llama-index for RAG operations" on Nov 11, 2025.
Mte90 (Member): @copilot you wrote a lot of code and replaced EmbeddingClient, but the old code is still there. If you replace something, I need you to also clean up the code it replaces.
Copilot stopped work on behalf of Mte90 due to an error on November 11, 2025 at 11:36.
Vector search was failing with "unable to retrieve context" because file content wasn't being retrieved from the database and passed to the LLM, making RAG ineffective.
Core Fix
- `search_semantic()` now calls `get_chunk_text()` to fetch the actual code chunks, not just metadata
- Dropped the `include_content` flag (it was de facto always true, since RAG cannot work without the content)

llama-index Migration
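A minimal sketch of the core fix, assuming a SQLite backend with a hypothetical `chunks` table (the table layout and both function bodies are illustrative, not the PR's actual code): the point is that each vector-search hit is joined back to its stored chunk text before being handed to the LLM.

```python
import sqlite3

def get_chunk_text(conn: sqlite3.Connection, chunk_id: int) -> str:
    """Fetch the stored chunk content for a vector-search hit."""
    row = conn.execute(
        "SELECT content FROM chunks WHERE id = ?", (chunk_id,)
    ).fetchone()
    return row[0] if row else ""

def search_semantic(conn: sqlite3.Connection, matched_ids: list[int]) -> list[dict]:
    """Return matches with their content, not just metadata."""
    results = []
    for chunk_id in matched_ids:
        meta = conn.execute(
            "SELECT file_path FROM chunks WHERE id = ?", (chunk_id,)
        ).fetchone()
        results.append({
            "id": chunk_id,
            "file_path": meta[0],
            # The fix: include the real chunk text so the LLM gets context.
            "content": get_chunk_text(conn, chunk_id),
        })
    return results

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (id INTEGER PRIMARY KEY, file_path TEXT, content TEXT)")
conn.execute("INSERT INTO chunks VALUES (1, 'app.py', 'def main(): ...')")
hits = search_semantic(conn, [1])
print(hits[0]["content"])  # → def main(): ...
```

Before the fix, the equivalent of `search_semantic()` returned only the metadata rows, so the "context" passed to the LLM was empty, producing the "unable to retrieve context" error.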
Created three new components:
- `OpenAICompatibleEmbedding`: wraps the OpenAI API for llama-index
- `chunk_with_llama_index`: uses CodeSplitter (code-aware) and SentenceSplitter
- `SQLiteVectorStore`: bridges the llama-index VectorStore interface with the sqlite-vector backend

Before:
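To illustrate what the `SQLiteVectorStore` bridge does, here is a toy stand-in (not the PR's class): it persists embeddings in SQLite and answers nearest-neighbour queries by cosine similarity. The real backend delegates the similarity search to sqlite-vector; this sketch does it in Python purely for clarity, and the table name and methods are assumptions.

```python
import json
import math
import sqlite3

class MiniSQLiteVectorStore:
    """Toy stand-in for a SQLite-backed vector store."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS vectors (id INTEGER PRIMARY KEY, embedding TEXT)"
        )

    def add(self, vec_id: int, embedding: list[float]) -> None:
        # Embeddings are serialized as JSON; sqlite-vector uses a binary format.
        self.conn.execute(
            "INSERT OR REPLACE INTO vectors VALUES (?, ?)",
            (vec_id, json.dumps(embedding)),
        )

    def query(self, embedding: list[float], top_k: int = 3) -> list[int]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        rows = self.conn.execute("SELECT id, embedding FROM vectors").fetchall()
        scored = [(cosine(embedding, json.loads(e)), i) for i, e in rows]
        scored.sort(reverse=True)  # highest similarity first
        return [i for _, i in scored[:top_k]]

store = MiniSQLiteVectorStore(sqlite3.connect(":memory:"))
store.add(1, [1.0, 0.0])
store.add(2, [0.0, 1.0])
print(store.query([0.9, 0.1], top_k=1))  # → [1]
```

Bridging llama-index's VectorStore interface to this kind of backend means llama-index handles chunking and embedding while the existing SQLite database remains the single source of truth, which is why no schema migration was needed.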
After:
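A hedged sketch of how `chunk_with_llama_index` might dispatch between code-aware and sentence-based splitting. The extension list and the stand-in splitters below are assumptions for illustration; the actual PR delegates to llama-index's CodeSplitter and SentenceSplitter.

```python
import re

# Hypothetical list of code extensions; not the PR's actual set.
CODE_EXTENSIONS = {".py", ".js", ".rs", ".go"}

def split_code(text: str) -> list[str]:
    # Stand-in for CodeSplitter: keep top-level blocks together by
    # splitting on blank lines.
    return [b for b in re.split(r"\n\s*\n", text) if b.strip()]

def split_sentences(text: str) -> list[str]:
    # Stand-in for SentenceSplitter: split on sentence-ending punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def chunk_file(path: str, text: str) -> list[str]:
    """Route code files to the code-aware splitter, everything else to
    the sentence splitter."""
    ext = "." + path.rsplit(".", 1)[-1] if "." in path else ""
    if ext in CODE_EXTENSIONS:
        return split_code(text)
    return split_sentences(text)

print(chunk_file("utils.py", "def a():\n    pass\n\ndef b():\n    pass"))
# → two chunks, one per top-level block
```

The benefit of the code-aware path is that chunk boundaries fall between definitions rather than mid-function, which keeps each embedded chunk semantically coherent.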
UI Enhancements
Performance
Cleanup
Removed ~600 lines of obsolete code:
- `EmbeddingClient` class (replaced by llama-index)
- `smart_chunker.py` (replaced by CodeSplitter/SentenceSplitter)
- `chunk_text` utility (replaced by llama-index)
- `include_content` parameter (the content is always needed)

Backwards Compatibility
No database schema changes. Existing projects continue to work. New indexes store `total_files` metadata for performance.
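A sketch of why the `total_files` metadata is backwards compatible, assuming a hypothetical key-value metadata table: new indexes record the count so the UI can read it directly, while old databases simply lack the row and fall back to counting, with no migration required.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS index_meta (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO files VALUES (?)", [("a.py",), ("b.py",)])

def total_files(conn: sqlite3.Connection) -> int:
    row = conn.execute(
        "SELECT value FROM index_meta WHERE key = 'total_files'"
    ).fetchone()
    if row is not None:
        return int(row[0])  # fast path: metadata stored by a new index
    # Fallback for pre-existing databases: count the rows directly.
    return conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]

print(total_files(conn))  # → 2 (fallback: no metadata row yet)
conn.execute("INSERT INTO index_meta VALUES ('total_files', '2')")
print(total_files(conn))  # → 2 (fast path)
```

Because old and new databases differ only by an optional metadata row, existing projects keep working unchanged.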